  Full text (fee-based)   939 articles
  Free   102 articles
  Free (domestic)   27 articles
Industrial technology   1068 articles
  2024   2 articles
  2023   43 articles
  2022   16 articles
  2021   49 articles
  2020   83 articles
  2019   86 articles
  2018   55 articles
  2017   44 articles
  2016   39 articles
  2015   30 articles
  2014   56 articles
  2013   70 articles
  2012   17 articles
  2011   32 articles
  2010   21 articles
  2009   25 articles
  2008   29 articles
  2007   37 articles
  2006   39 articles
  2005   29 articles
  2004   30 articles
  2003   28 articles
  2002   27 articles
  2001   25 articles
  2000   23 articles
  1999   11 articles
  1998   20 articles
  1997   29 articles
  1996   11 articles
  1995   8 articles
  1994   14 articles
  1993   6 articles
  1992   7 articles
  1991   6 articles
  1990   8 articles
  1989   2 articles
  1988   1 article
  1987   1 article
  1986   1 article
  1985   2 articles
  1984   3 articles
  1983   2 articles
  1981   1 article
A total of 1068 results were found (search time: 15 ms).
1.
A liquid marble (LM) is a droplet wrapped in hydrophobic solid particles that behaves as a non-wetting soft solid. These properties allow LMs to be applied in fluidics and soft-device applications. A wide variety of functional particles have been synthesized to form functional LMs. However, forming multifunctional LMs by integrating several types of functional particles remains challenging. Here, a general strategy for the flexible patterning of functional particles on droplet surfaces in a patchwork-like design is reported. It is shown that LMs can switch their macroscopic behavior in situ between a stable and an active state on super-repellent surfaces by jamming/unjamming the surface particles. Active LMs hydrostatically coalesce to form a self-sorted particle pattern on the droplet surface. With the support of LM-handling robotics, on-demand cyclic activation–manipulation–coalescence–stabilization protocols applied to LMs of different sizes and particle types enable the reliable design of multi-faced LMs. Based on this concept, a single bi-functional LM is designed from two mono-functional LMs as an advanced droplet carrier.
2.
Agricultural robots rely on semantic segmentation to distinguish between crops and weeds so that they can perform selective treatments and increase yield and crop health while reducing the amount of chemicals used. Deep-learning approaches have recently achieved both excellent classification performance and real-time execution. However, these techniques rely on large amounts of training data and a substantial labeling effort, both of which are scarce in precision agriculture. Additional design effort is required to achieve commercially viable performance levels under varying environmental conditions and crop growth stages. In this paper, we explore the role of knowledge transfer between deep-learning-based classifiers for different crop types, with the goal of reducing the retraining time and labeling effort required for a new crop. We examine the classification performance on three datasets with different crop types and a variety of weeds, and compare the performance and retraining effort required when using data labeled at the pixel level with partially labeled data obtained through a less time-consuming procedure of annotating the segmentation output. We show that transfer learning between different crop types is possible and reduces training times by up to 80%. Furthermore, we show that even when the data used for retraining are imperfectly annotated, the classification performance is within 2% of that of networks trained with laboriously annotated pixel-precision data.
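The transfer-learning setup summarized above can be pictured as fine-tuning a segmentation network trained on a source crop with a small, possibly partially labeled dataset for a new crop. The sketch below is only a minimal illustration of that idea; the network architecture, frozen backbone, class layout, checkpoint path, and use of an ignore index for unlabeled pixels are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

# Hypothetical class layout: background / crop / weed.
NUM_CLASSES = 3

# Start from a network trained on the source crop (checkpoint name is hypothetical).
model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
model.load_state_dict(torch.load("source_crop_segmenter.pt"))

# Freeze the backbone so only the classifier head adapts to the new crop.
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
# ignore_index lets partially labeled pixels (marked 255) be skipped in the loss.
criterion = nn.CrossEntropyLoss(ignore_index=255)

def finetune_epoch(loader):
    """One fine-tuning pass over the new-crop data (loader is assumed to exist)."""
    model.train()
    for images, labels in loader:      # labels: (B, H, W), 255 = unlabeled pixel
        optimizer.zero_grad()
        logits = model(images)["out"]  # (B, NUM_CLASSES, H, W)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```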
3.
Highly accurate real-time localization is of fundamental importance for the safety and efficiency of planetary rovers exploring the surface of Mars. Mars rover operations rely on vision-based systems to avoid hazards and plan safe routes. However, vision-based systems operate on the assumption that sufficient visual texture is visible in the scene. This poses a challenge for vision-based navigation on Mars, where regions lacking visual texture are prevalent. To overcome this, we exploit the rover's ability to actively steer the visual sensor in order to improve fault tolerance and maximize perception performance. This paper answers the question of where and when to look by presenting a method for predicting the sensor trajectory that maximizes the localization performance of the rover. This is accomplished by an online assessment of possible trajectories using synthetic future camera views created from previous observations of the scene. The proposed trajectories are quantified and chosen based on the expected localization performance. In this study, we validate the proposed method in field experiments at the Jet Propulsion Laboratory (JPL) Mars Yard. Furthermore, multiple performance metrics are identified and evaluated for reducing the overall runtime of the algorithm. We show how actively steering the perception system increases the localization accuracy compared with traditional fixed-sensor configurations.
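The "where and when to look" decision described above can be pictured as scoring candidate sensor trajectories against synthetic future views and keeping the best-scoring one. The sketch below is a rough schematic under that reading; the candidate representation, view synthesis, and scoring function are placeholders rather than the paper's actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple

@dataclass
class CandidateTrajectory:
    """Hypothetical sequence of (pan, tilt) angles for the steerable camera,
    one pair per future rover pose along the planned path."""
    angles: Sequence[Tuple[float, float]]

def select_sensor_trajectory(
    candidates: List[CandidateTrajectory],
    synthesize_view: Callable[[float, float], object],   # renders a synthetic view
    localization_score: Callable[[object], float],       # e.g. predicted trackable texture
) -> CandidateTrajectory:
    """Return the candidate whose synthetic future views promise the best
    expected localization performance (placeholder scoring, assumption)."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        # Score each future viewpoint using a preview rendered from past observations.
        score = sum(localization_score(synthesize_view(pan, tilt))
                    for pan, tilt in cand.angles)
        if score > best_score:
            best, best_score = cand, score
    return best
```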
4.
The automatic design of controllers for mobile robots usually requires two stages. In the first stage, sensor data are preprocessed or transformed into high-level, meaningful variables that are usually defined from expert knowledge. In the second stage, a machine-learning technique is applied to obtain a controller that maps these high-level variables to the control commands actually sent to the robot. This paper describes an algorithm that embeds the preprocessing stage into the learning stage in order to obtain controllers directly from raw sensor data, with no expert knowledge involved. Because of the high dimensionality of the sensor data, this approach uses Quantified Fuzzy Rules (QFRs), which transform low-level input variables into high-level input variables, reducing the dimensionality through summarization. The proposed learning algorithm, called Iterative Quantified Fuzzy Rule Learning (IQFRL), is based on genetic programming. IQFRL can learn rules with different structures and can manage linguistic variables with multiple granularities. The algorithm has been tested on the wall-following behavior, both in several realistic simulated environments of varying complexity and on a Pioneer 3-AT robot in two real environments. Results have been compared with those of several well-known learning algorithms combined with different data-preprocessing techniques, showing that IQFRL exhibits a better and statistically significant performance. Moreover, three real-world applications in which IQFRL plays a central role are also presented: path and object tracking with avoidance of static and moving obstacles.
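As a rough illustration of what a quantified fuzzy proposition does, the snippet below evaluates "most of the frontal laser beams are near", summarizing a high-dimensional scan into a single high-level truth value. The membership function, quantifier shape, and variable names are invented for illustration and do not reproduce the IQFRL rule format.

```python
def mu_near(distance_m: float) -> float:
    """Assumed linear membership for 'near': 1 at 0 m, falling to 0 at 1 m."""
    return max(0.0, min(1.0, 1.0 - distance_m / 1.0))

def quantifier_most(proportion: float) -> float:
    """Assumed fuzzy quantifier 'most': 0 below 30%, 1 above 80%, linear in between."""
    return max(0.0, min(1.0, (proportion - 0.3) / 0.5))

def truth_most_frontal_near(frontal_beams_m) -> float:
    """Degree of truth of 'most of the frontal beams are near', i.e. a whole
    slice of the laser scan summarized into one high-level input value."""
    memberships = [mu_near(d) for d in frontal_beams_m]
    proportion = sum(memberships) / len(memberships)
    return quantifier_most(proportion)

# Example: a scan in which roughly half of the frontal beams see a close wall.
print(truth_most_frontal_near([0.3, 0.4, 0.5, 1.2, 1.5, 0.6]))
```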
5.
This paper presents a human–robot co-working system to be applied to industrial tasks such as the production line of a paint factory. The aim is to optimize the picking task in a paint factory with respect to purely manual operation. The use of an agile autonomous robot co-worker reduces the time spent picking materials, and reducing the worker's exposure time to raw materials improves human safety. Moreover, process supervision is also improved thanks to better traceability of the whole process. The system consists of a manufacturing process management system, an autonomous navigation system, and a people detection and tracking system. The localization module does not require the installation of reflectors or visual markers for robot operation, which significantly simplifies system deployment in a factory. The robot is able to respond to changing environmental conditions such as people, moving forklifts, or unmapped static obstacles like pallets or boxes. The system is not tied to specific manufacturing orders: it is fully integrated with the manufacturing process management system and can process all possible orders as long as their components are placed in the warehouse. Real experiments to validate the system have been performed in a paint factory with a real holonomic platform and a worker. The results are promising in terms of performance indicators such as the worker's exposure time to raw materials, process automation, robust and safe navigation, and end-user assessment.
6.
Soft robots built with active soft materials have become increasingly attractive. Despite tremendous effort on soft sensors and actuators, it remains extremely challenging to construct intelligent soft materials that simultaneously actuate and sense their own motions, resembling the neuromuscular behavior of living organisms. This work presents a soft robotic strategy that couples actuation and strain sensing in a single homogeneous material, composed of an interpenetrating double network of a nanostructured thermo-responsive hydrogel, poly(N-isopropylacrylamide) (PNIPAAm), and a light-absorbing, electrically conductive polymer, polypyrrole (PPy). This design grants the material both photo/thermal responsiveness and piezoresistive responsiveness, enabling remotely triggered actuation and local strain sensing. This self-sensing actuating soft material demonstrated ultra-high stretchability (210%) and rapid, large volume shrinkage (70%) upon irradiation or heating (13%/°C, 6 times faster than conventional PNIPAAm). The significant deswelling of the hydrogel network densifies the percolation of the PPy network, leading to a drastic conductivity change upon locomotion with a gauge factor of 1.0. The material demonstrated a variety of precise, remotely driven, photo-responsive locomotion such as signal tracking, bending, weightlifting, and object grasping and transporting, while simultaneously monitoring these motions via real-time resistance change. Such multifunctional sensory actuatable materials may lead to next-generation soft robots with higher levels of autonomy and complexity through self-diagnostic feedback control.
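For reference, the gauge factor quoted above is conventionally defined as the relative resistance change per unit strain, so a gauge factor of 1.0 means the fractional resistance change tracks the applied strain roughly one-to-one. The expression below is the standard textbook definition, not a formula taken from the paper:

$$ \mathrm{GF} = \frac{\Delta R / R_0}{\varepsilon} $$

where R_0 is the unstrained resistance, ΔR the resistance change under deformation, and ε the applied strain.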
7.
The World Robot Summit is a robotics "Olympics" intended to be held in a different country every four years starting in 2020. The concept of the Plant Disaster Prevention challenge is daily inspection, checking, and emergency response in industrial plants; in this competition, robots must carry out these types of missions in a mock-up plant. The concept of the Tunnel Disaster Response and Recovery challenge is emergency response to tunnel disasters; it is a simulation competition in which teams compete to show their ability to deal with disasters by collecting information and removing debris. The Standard Disaster Robotics challenge assesses, in the form of a contest, the standard performance levels of a robot that are necessary for disaster prevention and emergency response. The World Robot Summit Preliminary Competition was held at Tokyo Big Sight in October 2018, and 36 teams participated in the Disaster Robotics Category. UGVs and UAVs competed on the merits of new technology for solving complex problems, using core technologies such as mobility, sensing, recognition, manipulation, human interfaces, and autonomous intelligence, as well as system integration and strategies for completing missions, achieving high-level results.
8.
The shape-shifting behavior of liquid crystal networks (LCNs) and elastomers (LCEs) results from an interplay between their initial geometry and their molecular alignment. For years, reliance on either one-step in situ or two-step film-processing techniques has limited shape-change transformations to 2D-to-3D geometries. The combination of various fabrication techniques, alignment methods, and chemical formulations developed in recent years has introduced new opportunities to achieve 3D-to-3D shape transformations at large scales, although precise control of local molecular alignment in microscale 3D constructs remains a challenge. Here, the voxel-by-voxel encoding of nematic alignment in 3D LCN microstructures produced by two-photon polymerization using high-resolution topographical features is demonstrated. 3D LCN microstructures (suspended films, coils, and rings) with designable 2D and 3D director fields at a resolution of 5 µm are achieved. Different shape transformations upon actuation are elicited from LCN microstructures with the same geometry but dissimilar molecular alignments. This strategy offers greater freedom in the shape-change programming of 3D LCN microstructures and expands their applicability in emerging technologies such as small-scale soft robots, devices, and responsive surfaces.
9.
10.
Disassembly is a key step for the efficient treatment of end-of-life (EOL) products. A principle of cognitive robotics is implemented to address the uncertainties and variations in the automatic disassembly process. In this article, advanced behaviour control based on two cognitive abilities, namely learning and revision, is proposed. The knowledge related to the disassembly process of a particular product model is learned by the cognitive robotic agent (CRA) and is applied when the same model is seen again. This knowledge can be used as a disassembly sequence plan (DSP) and a disassembly process plan (DPP). The agent learns autonomously by reasoning throughout the process. In case of an unresolved condition, human assistance is given and the corresponding knowledge is learned by demonstration. The process can be performed more efficiently by applying a revision strategy that optimises the operation plans. As a result, the performance of the process in terms of time and level of autonomy is improved. The validation was done on various models of a case-study product, the liquid crystal display (LCD) screen.
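The learning and revision abilities outlined above can be pictured as a knowledge base keyed by product model: a known model is disassembled from the stored plan, an unknown one falls back to human demonstration (and the demonstrated plan is stored), and a revision step prunes operations found to be unnecessary. The sketch below is purely schematic; the class name, plan representation, and revision criterion are assumptions made for illustration.

```python
from typing import Callable, Dict, List, Optional, Set

DisassemblyPlan = List[str]  # ordered operations, e.g. ["remove_screws", "detach_cover"]

class CognitiveDisassemblyAgent:
    """Toy knowledge base illustrating learning and revision (names are assumptions)."""

    def __init__(self) -> None:
        self.knowledge: Dict[str, DisassemblyPlan] = {}

    def disassemble(self, model_id: str,
                    demonstrate: Callable[[], DisassemblyPlan]) -> DisassemblyPlan:
        """Use the stored plan for a known model; otherwise learn one by demonstration."""
        plan = self.knowledge.get(model_id)
        if plan is None:
            plan = demonstrate()            # unresolved condition: human assistance
            self.knowledge[model_id] = plan  # learn the demonstrated plan
        return plan

    def revise(self, model_id: str, executed_plan: DisassemblyPlan,
               redundant_ops: Optional[Set[str]] = None) -> None:
        """Revision: drop operations found to be unnecessary, shortening future runs."""
        redundant_ops = redundant_ops or set()
        self.knowledge[model_id] = [op for op in executed_plan if op not in redundant_ops]
```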